
Unknown vector



Cone-Constrained Principal Component Analysis

Neural Information Processing Systems

Estimating a vector from noisy quadratic observations is a task that arises naturally in many contexts, from dimensionality reduction, to synchronization and phase retrieval problems. It is often the case that additional information is available about the unknown vector (for instance, sparsity, sign, or magnitude of its entries). Many authors propose non-convex quadratic optimization problems that aim at optimally exploiting this information. However, solving these problems is typically NP-hard. We consider a simple model for noisy quadratic observation of an unknown vector $v_0$.
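To make the model concrete, here is a minimal sketch (all dimensions and parameter values are hypothetical, and projected power iteration is a standard heuristic for this setting, not necessarily the paper's algorithm): a rank-one signal $\beta v v^\top$ is observed in symmetric noise, with the side information that $v$ lies in the nonnegative orthant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance of the noisy quadratic-observation model:
# observe Y = beta * v v^T + symmetric noise, with the side information
# that the planted vector v lies in the nonnegative orthant (a cone).
n, beta = 50, 4.0
v = np.abs(rng.normal(size=n))
v /= np.linalg.norm(v)
g = rng.normal(size=(n, n))
Y = beta * np.outer(v, v) + (g + g.T) / np.sqrt(2 * n)

# Projected power iteration: alternate a power step with projection
# onto the cone (here: clip negatives), then renormalize.
x = np.full(n, 1 / np.sqrt(n))
for _ in range(100):
    x = Y @ x
    x = np.clip(x, 0.0, None)
    x /= np.linalg.norm(x)

print(abs(x @ v))  # correlation with the planted vector; high when beta is large
```

The clipping step is where the cone constraint enters: without it this is plain power iteration on the observed matrix.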


Global Guarantees for Blind Demodulation with Generative Priors

Neural Information Processing Systems

We study a deep learning inspired formulation for the blind demodulation problem, which is the task of recovering two unknown vectors from their entrywise multiplication.
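A tiny sketch of the forward model (toy dimensions, hypothetical names) shows why additional structure such as a generative prior is needed: the entrywise product alone leaves an inherent scaling ambiguity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model for blind demodulation: we observe only the
# entrywise product of two unknown vectors w and x.
n = 8
w = rng.normal(size=n)
x = rng.normal(size=n)
y = w * x  # the observation

# Without further structure the problem is ill-posed: any nonzero
# scaling c yields another pair (c*w, x/c) consistent with y.
c = 3.0
assert np.allclose((c * w) * (x / c), y)
```

Constraining both factors to the ranges of (assumed) generative networks is one way to break such ambiguities, which is the setting the paper analyzes.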


Support Recovery of Sparse Signals from a Mixture of Linear Measurements

Neural Information Processing Systems

Recovery of the support of a sparse vector from simple measurements is a widely studied problem, considered under the frameworks of compressed sensing, 1-bit compressed sensing, and more general single index models. We consider generalizations of this problem: mixtures of linear regressions, and mixtures of linear classifiers, where the goal is to recover the supports of multiple sparse vectors using only a small number of possibly noisy linear and 1-bit measurements, respectively. The key challenge is that the measurements from different vectors are randomly mixed. Both of these problems have also received attention recently. In mixtures of linear classifiers, an observation corresponds to which side of the queried hyperplane a random unknown vector lies on; whereas in mixtures of linear regressions we observe the projection of a random unknown vector onto the queried hyperplane. The primary step in recovering the unknown vectors from the mixture is to first identify the support of all the individual component vectors. In this work, we study the number of measurements sufficient for recovering the supports of all the component vectors in a mixture in both these models. We provide algorithms that use a number of measurements polynomial in $k$ and $\log n$, and quasi-polynomial in $\ell$, to recover the support of all the $\ell$ unknown vectors in the mixture with high probability when each individual component is a $k$-sparse $n$-dimensional vector.
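The two query models can be sketched as follows (the component vectors, sparsity, and dimensions are illustrative, not taken from the paper): each query is answered by a component chosen uniformly at random, and the classifier variant reveals only one bit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical mixture: ell components, each a k-sparse n-dimensional vector.
n, k, ell = 20, 3, 2
betas = np.zeros((ell, n))
for b in betas:
    b[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

def query_regression(x):
    """Mixture of linear regressions: projection of a random component onto x."""
    j = rng.integers(ell)
    return betas[j] @ x

def query_classifier(x):
    """Mixture of linear classifiers: only the side of the random hyperplane."""
    j = rng.integers(ell)
    return np.sign(betas[j] @ x)

x = rng.normal(size=n)
print(query_regression(x), query_classifier(x))
```

The random index `j` is not observed, which is exactly the mixing that makes support recovery here harder than in standard compressed sensing.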


Iterative Least Trimmed Squares for Mixed Linear Regression

Yanyao Shen, Sujay Sanghavi

Neural Information Processing Systems

In mixed linear regression (MLR), each sample is generated by one of several unknown linear models; our objective is to recover all (or some, or one) of them from the samples. In this paper, we consider MLR with the additional presence of corruptions, i.e., adversarial additive errors in the samples.
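A minimal sketch of the iterative least trimmed squares idea, under assumed toy parameters (the trimming size `m_keep` and the corruption pattern are illustrative): alternately fit on the currently trusted samples, then trust the samples with the smallest residuals.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setup: most samples come from one linear model, the rest
# carry large additive errors; we want that dominant model back.
n, d = 200, 5
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true
bad = rng.choice(n, size=40, replace=False)
y[bad] += rng.normal(scale=10.0, size=40)  # adversarial-style additive errors

# Iterative least trimmed squares sketch: fit least squares on the kept
# set, then keep the m_keep samples with the smallest residuals.
keep = np.arange(n)
m_keep = 150  # how many samples to trust per round (an assumed parameter)
for _ in range(10):
    w, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    resid = np.abs(X @ w - y)
    keep = np.argsort(resid)[:m_keep]

print(np.linalg.norm(w - w_true))  # small when clean samples dominate
```

With multiple mixture components the same alternation is run per component, which is the regime the paper analyzes.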





mixture-main

Soumyabrata

Neural Information Processing Systems

In this section, we prove the helper Lemmas 10 and 11 to complete the proof of Theorem 1, and also present the proof of Theorem 2. The proof of the lemma follows from a simple application of the Chernoff bound: consider a matrix $G$ of size $m \times n$ where each entry is generated independently from a Bernoulli($p$) distribution, with $p$ as a parameter. The two-stage approximate recovery algorithm, as the name suggests, proceeds in two sequential steps. In the first stage, we recover the supports of all the $\ell$ unknown vectors (Algorithm 2, presented in Section 5). In the second stage, we use these deduced supports to approximately recover the unknown vectors (Algorithm 5, described in Section B.2). B.1 Support recovery (missing proofs from Section 5): first, we show how to compute $|S^{(i)}|$ for every index $i \in [n]$ using Algorithm 3.
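The random design in the lemma can be simulated directly; this sketch (dimensions and `p` chosen arbitrarily) only illustrates the Chernoff-style concentration of the Bernoulli($p$) column sums around their mean $mp$, not the paper's algorithms.

```python
import numpy as np

rng = np.random.default_rng(4)

# An m x n matrix G with i.i.d. Bernoulli(p) entries, p a tunable parameter,
# as in the random query design described above.
m, n, p = 400, 50, 0.3
G = (rng.random((m, n)) < p).astype(int)

# Chernoff-style concentration: each column sum of m Bernoulli(p) entries
# stays within O(sqrt(m)) of its mean m*p with high probability.
col_sums = G.sum(axis=0)
print(col_sums.mean() / m)  # empirical frequency, close to p
```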


mixture-main

Soumyabrata

Neural Information Processing Systems

In the problem of learning a mixture of linear classifiers, the aim is to learn a collection of hyperplanes from a sequence of binary responses. Each response is the result of querying with a vector and indicates which side of a randomly chosen hyperplane from the collection the query vector lies on. This model provides a rich representation of heterogeneous data with categorical labels, and has only been studied in some special settings. We look at a hitherto unstudied problem: upper-bounding the query complexity of recovering all the hyperplanes, especially in the case when the hyperplanes are sparse. This setting is a natural generalization of the extreme-quantization problem known as 1-bit compressed sensing.
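For intuition, here is a sketch of the degenerate case with a single sparse hyperplane (the naive basis-vector querying strategy and all parameters are illustrative, not the paper's algorithm): each query returns only the sign of an inner product, exactly as in 1-bit compressed sensing.

```python
import numpy as np

rng = np.random.default_rng(5)

# Special case ell = 1: one k-sparse hyperplane, so the model reduces
# to 1-bit compressed sensing, where a query reveals only sign(<w, x>).
n, k = 30, 4
w = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
w[support] = rng.normal(size=k)

def query(x):
    """Return which side of the hyperplane {z : <w, z> = 0} x lies on."""
    return np.sign(w @ x)

# Naive support recovery: querying the standard basis vectors reveals
# exactly which coordinates of w are nonzero (n queries; the paper's
# interest is far fewer queries, and ell > 1 randomly mixed responses).
recovered = set()
for i in range(n):
    e = np.zeros(n)
    e[i] = 1.0
    if query(e) != 0:
        recovered.add(i)

print(sorted(recovered))
```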